
    Convergence of the steepest descent method for minimizing quasiconvex functions

    To minimize a continuously differentiable quasiconvex function f : ℝⁿ → ℝ, Armijo's steepest descent method generates a sequence x_{k+1} = x_k − t_k ∇f(x_k), where t_k > 0. We establish strong convergence properties of this classic method: either x_k → x̄ with ∇f(x̄) = 0, or argmin f = ∅, ∥x_k∥ → ∞, and f(x_k) ↓ inf f. We also discuss extensions to other line searches.
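    The iteration above is easy to state concretely. Below is a minimal sketch of steepest descent with Armijo backtracking on a smooth quasiconvex objective; the function names, the parameter values (initial step s, backtracking factor beta, sufficient-decrease constant sigma), and the test function are illustrative, not taken from the paper.

```python
import numpy as np

def armijo_steepest_descent(f, grad, x0, s=1.0, beta=0.5, sigma=1e-4,
                            tol=1e-8, max_iter=10000):
    """Steepest descent x_{k+1} = x_k - t_k * grad f(x_k), with t_k chosen
    by Armijo backtracking: shrink a trial step until the sufficient-decrease
    condition f(x - t g) <= f(x) - sigma * t * ||g||^2 holds."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= tol:   # stop near a stationary point
            return x
        t = s
        while f(x - t * g) > f(x) - sigma * t * g.dot(g):
            t *= beta                  # backtrack until sufficient decrease
        x = x - t * g
    return x

# Example: f(x) = sqrt(1 + ||x - c||^2) is smooth and quasiconvex,
# with unique minimizer c.
c = np.array([1.0, -2.0])
f = lambda x: np.sqrt(1.0 + np.sum((x - c) ** 2))
grad = lambda x: (x - c) / np.sqrt(1.0 + np.sum((x - c) ** 2))
print(armijo_steepest_descent(f, grad, np.zeros(2)))   # approx. [1, -2]
```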

    Convergence Analysis of Some Methods for Minimizing a Nonsmooth Convex Function

    In this paper, we analyze a class of methods for minimizing a proper lower semicontinuous extended-valued convex function f. Instead of the original objective function f, we employ a convex approximation f_{k+1} at the kth iteration. Some global convergence rate estimates are obtained. We illustrate our approach by proposing (i) a new family of proximal point algorithms that retains the global convergence rate estimate even if the iteration points are calculated approximately, where the λ_k > 0 are the proximal parameters, and (ii) a variant of the proximal bundle method. Applications to stochastic programs are discussed.
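    For a concrete picture of the proximal point iteration discussed above, here is a schematic sketch in which each subproblem is solved numerically with an off-the-shelf routine; the paper's setting allows inexact subproblem solutions and general proper lsc convex f, neither of which this toy version captures. All names and the test function are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def proximal_point(f, x0, lambdas):
    """Exact proximal point iteration:
    x_{k+1} = argmin_y  f(y) + ||y - x_k||^2 / (2 * lambda_k),
    where lambda_k > 0 are the proximal parameters."""
    x = np.asarray(x0, dtype=float)
    for lam in lambdas:
        # Solve the (strongly convex) subproblem numerically; Nelder-Mead
        # tolerates the nonsmoothness of f in this small example.
        obj = lambda y, x=x, lam=lam: f(y) + np.sum((y - x) ** 2) / (2.0 * lam)
        x = minimize(obj, x, method="Nelder-Mead").x
    return x

# Example: the nonsmooth convex function f(x) = ||x||_1; each step is a
# soft-thresholding step, so the iterates approach the minimizer 0.
f = lambda x: np.abs(x).sum()
print(proximal_point(f, np.array([3.0, -2.0]), lambdas=[1.0] * 10))
```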

    Large-scale optimization with the primal-dual column generation method

    The primal-dual column generation method (PDCGM) is a general-purpose column generation technique that relies on the primal-dual interior point method to solve the restricted master problems. This interior point variant yields suboptimal, well-centered dual solutions, which naturally stabilizes the column generation. As recently reported in the literature, this typically reduces the number of calls to the oracle and the CPU times when compared with standard column generation, which relies on extreme optimal dual solutions. However, those results are based on relatively small problems obtained from linear relaxations of combinatorial applications. In this paper, we investigate the behaviour of the PDCGM in a broader context, namely when solving large-scale convex optimization problems. We have selected applications that arise in important real-life contexts such as data analysis (multiple kernel learning), decision-making under uncertainty (two-stage stochastic programming), and telecommunication and transportation networks (multicommodity network flow). In the numerical experiments, we use publicly available benchmark instances to compare the performance of the PDCGM against recent results for different methods presented in the literature, which were the best available results to date. The analysis of these results suggests that the PDCGM is an attractive alternative to specialized methods, since it remains competitive in terms of the number of iterations and CPU times even for large-scale optimization problems.
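    To make the column generation loop that the PDCGM stabilizes concrete, here is a textbook sketch of standard (simplex-based) column generation for the cutting-stock LP relaxation; it is not the PDCGM itself, whose restricted master would instead be solved inexactly by a primal-dual interior point method. The function name, the DP pricing oracle, and the instance are all illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def cutting_stock_colgen(widths, demand, W, tol=1e-9):
    """Standard column generation for the cutting-stock LP relaxation:
    restricted master  min 1'x  s.t.  A x >= demand, x >= 0,
    priced by an unbounded-knapsack oracle (dynamic programming)."""
    m = len(widths)
    # Trivial initial patterns: as many copies of one width as fit in a roll.
    cols = [np.eye(m)[:, i] * (W // widths[i]) for i in range(m)]
    while True:
        A = np.column_stack(cols)
        res = linprog(c=np.ones(A.shape[1]), A_ub=-A,
                      b_ub=-np.asarray(demand, dtype=float),
                      bounds=[(0, None)] * A.shape[1], method="highs")
        y = -res.ineqlin.marginals                 # dual prices of A x >= d
        # Pricing: maximize y'a over patterns a with w'a <= W (unbounded knapsack).
        best = np.zeros(W + 1)
        choice = np.full(W + 1, -1)
        for cap in range(1, W + 1):
            for i, w in enumerate(widths):
                if w <= cap and best[cap - w] + y[i] > best[cap]:
                    best[cap], choice[cap] = best[cap - w] + y[i], i
        if best[W] <= 1.0 + tol:                   # no negative reduced cost left
            return res.fun
        a, cap = np.zeros(m), W                    # rebuild the best pattern
        while choice[cap] >= 0:
            i = int(choice[cap])
            a[i] += 1
            cap -= widths[i]
        cols.append(a)

# Illustrative instance: rolls of width 100, three piece widths with demands.
print(cutting_stock_colgen([45, 36, 31], [97, 610, 395], 100), "rolls (LP bound)")
```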

    An inexact bundle variant suited to column generation

    We give a bundle method for constrained convex optimization. Instead of using penalty functions, it shifts iterates towards feasibility by way of a Slater point, assumed to be known. Moreover, the method accepts an oracle delivering function and subgradient values with unknown accuracy. Our approach is motivated by a number of applications in column generation, in which the constraints are positively homogeneous (so that zero is a natural Slater point) and an exact oracle may be time-consuming. Finally, our convergence analysis employs arguments that have so far seen little use in the bundle community. The method is illustrated on a number of cutting-stock problems.
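    The paper's method is an inexact, constrained bundle method with feasibility shifts through a Slater point, which is beyond a short sketch. As a self-contained illustration of the cutting-plane model that bundle methods build on, here is an unstabilized Kelley's method over a box; the oracle interface, names, and example are made up for illustration, and the exact oracle below ignores the paper's unknown-accuracy setting.

```python
import numpy as np
from scipy.optimize import linprog

def kelley_cutting_plane(oracle, x0, lower, upper, max_iter=50, tol=1e-6):
    """Kelley's cutting-plane method: the polyhedral model that bundle
    methods stabilize. oracle(x) returns (f(x), g) with g a subgradient;
    the master LP minimizes r over the accumulated cuts
    r >= f_i + g_i'(x - x_i) and the box [lower, upper]."""
    n = len(x0)
    xs, fs, gs = [np.asarray(x0, dtype=float)], [], []
    best, x_best = np.inf, xs[0]
    for _ in range(max_iter):
        fval, g = oracle(xs[-1])
        fs.append(fval); gs.append(np.asarray(g, dtype=float))
        if fval < best:
            best, x_best = fval, xs[-1]
        # Master LP in (x, r): min r  s.t.  g_i'x - r <= g_i'x_i - f_i.
        A = np.column_stack([np.vstack(gs), -np.ones(len(gs))])
        b = np.array([g_.dot(x_) - f_ for g_, x_, f_ in zip(gs, xs, fs)])
        res = linprog(c=np.r_[np.zeros(n), 1.0], A_ub=A, b_ub=b,
                      bounds=[(l, u) for l, u in zip(lower, upper)]
                             + [(None, None)], method="highs")
        if best - res.fun <= tol:      # model lower bound meets best value
            break
        xs.append(res.x[:n])
    return x_best, best

# Example: minimize f(x) = max(|x1 - 1|, |x2 + 2|) over the box [-5, 5]^2.
def oracle(x):
    centers = np.array([1.0, -2.0])
    vals = np.abs(x - centers)
    i = int(np.argmax(vals))           # active piece of the max
    g = np.zeros(2)
    g[i] = np.sign(x[i] - centers[i])  # one subgradient of the active piece
    return vals[i], g

print(kelley_cutting_plane(oracle, np.zeros(2), [-5, -5], [5, 5]))
```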